Explore Blog Posts
Large Language Models in Healthcare: Are We There Yet?
While these tools show potential in clinical practice, we urgently need a systematic approach to evaluation.
The Disinformation Machine: How Susceptible Are We to AI Propaganda?
With a bit of prodding, AI-generated propaganda is more effective than propaganda written by humans.
Improving Equity and Access to Non-English Large Language Models
The lessons learned from the fine-tuning and evaluation of Vietnamese LLMs could help broaden access to models beyond...
Inside The New AI Index: Expensive New Models, Targeted Investments, and More
The new report covers major AI trends in technical advances, regulation, education, economics, and global politics.
AI Index: State of AI in 13 Charts
In the new report, foundation models dominate, benchmarks fall, prices skyrocket, and on the global stage, the U.S....
AI Index: Five Trends in Frontier AI Research
The new AI Index spots major advances in multimodal models, robotics, generative AI, and more.
Building a Social Media Algorithm That Actually Promotes Societal Values
A Stanford research team shows that building democratic values into a feed-ranking algorithm reduces partisan animosity.
Where Generative AI Meets Human Rights
Experts in technology, law, and human rights debate the unique implications of this technology and how we might best...
Why Large Language Models Like ChatGPT Treat Black- and White-Sounding Names Differently
A new study shows systemic issues in some of the most popular models.
Privacy in an AI Era: How Do We Protect Our Personal Information?
A new report analyzes the risks of AI and offers potential solutions.
Stanford HAI at Five: Pioneering the Future of Human-Centered AI
In its fifth year, HAI catalyzed a multidisciplinary community of researchers, industry, policy, and civil society to...
Who’s at Fault When AI Fails in Health Care?
Hospitals are increasingly adopting AI tools for patient care. They need to be thinking about liability.